[Bugfix] Fix a bad import #29694
Conversation
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
💡 Codex Review
Here are some automated review suggestions for this pull request.
  from vllm.logger import init_logger
- from vllm.transformers_utils.processor import cached_processor_from_config
+ from vllm.transformers_utils.input_processor import cached_processor_from_config
Fix broken cached_processor import
The new import points to vllm.transformers_utils.input_processor, but no such module exists in this repo; cached_processor_from_config is still defined in transformers_utils/processor.py. As a result, importing vllm.multimodal.processing (e.g., when enabling multimodal pipelines) would now raise ModuleNotFoundError before any processing logic runs.
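One quick way to sanity-check a claim like this at review time is to resolve the import path without executing any package code. Below is a small standard-library sketch; the vLLM module names are taken from the diff above, and the helper name module_exists is my own, not part of vLLM:

```python
import importlib.util

def module_exists(name: str) -> bool:
    """Return True if the dotted module path resolves to a module spec."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # Raised when a parent along the dotted path is missing
        # or is not a package, so the path can never resolve.
        return False

# Sanity checks against the standard library:
print(module_exists("json"))          # True: real package
print(module_exists("json.no_such"))  # False: missing submodule
```

Under the Codex comment's claim, module_exists("vllm.transformers_utils.input_processor") would return False in this repo, while the original path vllm.transformers_utils.processor would resolve.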
Code Review
This pull request addresses a bug by correcting an import path in vllm/multimodal/processing.py. The cached_processor_from_config function is now imported from vllm.transformers_utils.input_processor instead of vllm.transformers_utils.processor. This change aligns with a likely refactoring where the function was moved to a new module. The fix is straightforward and appears correct, ensuring that the multimodal processing logic uses the intended utility function from its canonical source.
False alarm, closing